Current Issue: July–September 2017, Issue 3 (5 Articles)
In this work, a new combined vision technique (CVT) is proposed, comprehensively developed, and experimentally tested for stable, precise unmanned micro aerial vehicle (MAV) pose estimation. The CVT combines two measurement methods (multi- and mono-view) based on different constraint conditions. These constraints are considered simultaneously by the particle filter framework to improve the accuracy of visual positioning. The framework, which is driven by an onboard inertial module, takes the positioning results from the visual system as measurements and updates the vehicle state. Moreover, experimental testing and data analysis have been carried out to verify the proposed algorithm, including multi-camera configuration, design and assembly of MAV systems, and marker detection and matching between different views. Our results indicate that the combined vision technique is very attractive for high-performance MAV pose estimation.
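The abstract does not give the filter equations, but the described predict/update cycle (inertial-driven prediction, visual pose as the measurement) follows the standard particle filter pattern. The sketch below is a minimal, generic illustration of that pattern, not the authors' implementation; all function names, the 2-D state, and the noise parameters are assumptions.

```python
import numpy as np

def predict(particles, accel, dt, noise_std, rng):
    """Propagate 2-D position hypotheses with an inertial (acceleration) input.

    particles: (N, 2) array of position hypotheses.
    accel: (2,) acceleration reading from an onboard inertial module.
    """
    drift = 0.5 * accel * dt ** 2
    return particles + drift + rng.normal(0.0, noise_std, particles.shape)

def update(particles, z, meas_std):
    """Weight particles by a Gaussian likelihood of the visual measurement z."""
    d2 = np.sum((particles - z) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / meas_std ** 2)
    return w / w.sum()

def resample(particles, w, rng):
    """Systematic resampling to concentrate particles on high-weight states."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(w), positions)
    return particles[np.minimum(idx, n - 1)]

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, (500, 2))  # initial spread around the origin
z = np.array([2.0, 1.0])                    # pose measurement from the vision system
particles = predict(particles, np.zeros(2), dt=0.02, noise_std=0.1, rng=rng)
w = update(particles, z, meas_std=0.5)
particles = resample(particles, w, rng)
estimate = particles.mean(axis=0)           # posterior mean as the state estimate
```

In the paper's setting, `update` would be evaluated against both the multi-view and mono-view constraints simultaneously; here a single Gaussian likelihood stands in for both.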
Outdoor images captured in bad weather are prone to yield poor visibility, which is a fatal problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation; that is, the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation. By adopting the proposed model, a corresponding dehazing method is also presented. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the hazy image into scenes according to the haze density similarity. Then, in order to improve the atmospheric light estimation accuracy, we define an effective weight assignment function to locate a candidate scene based on the scene segmentation results and therefore avoid most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP), which is a statistic of extensive high-definition outdoor images. Using this prior combined with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid, and the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.
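The baseline this paper improves on is the classical homogeneous atmospheric scattering model, I(x) = J(x)t(x) + A(1 − t(x)) with transmission t(x) = exp(−βd(x)). The sketch below only demonstrates that baseline inversion on synthetic data (the paper's improved model, haze density map, and ASP prior are not reproduced here); the values of `beta`, `A`, and the clipping floor `t_min` are illustrative assumptions.

```python
import numpy as np

def transmission(depth, beta):
    """t(x) = exp(-beta * d(x)) under the homogeneous-atmosphere assumption."""
    return np.exp(-beta * depth)

def dehaze(I, A, t, t_min=0.1):
    """Invert I = J*t + A*(1 - t) to recover the scene radiance/albedo J.

    t is clipped below at t_min to avoid amplifying noise where haze is dense.
    """
    t = np.clip(t, t_min, 1.0)
    return (I - A) / t[..., None] + A

# Synthesize a hazy image from a known albedo, then invert the model.
depth = np.array([[1.0, 5.0], [10.0, 20.0]])   # per-pixel scene depth
t = transmission(depth, beta=0.1)
J_true = np.full(depth.shape + (3,), 0.3)      # constant synthetic albedo
A = np.array([0.8, 0.8, 0.8])                  # atmospheric light
I = J_true * t[..., None] + A * (1.0 - t[..., None])
J_rec = dehaze(I, A, t)                        # recovers J_true exactly here
```

The paper's contribution is precisely that β need not be a single global constant; this snippet only shows what the per-scene inversion looks like once β (and hence t) has been estimated.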
Advanced vision solutions enable manufacturers in the technology sector to reconcile both competitive and regulatory concerns and address the need for immaculate fault detection and quality assurance. Modern manufacturing has shifted completely from manual inspection to machine-assisted vision inspection, and research outcomes in industrial automation have revolutionized the whole product development strategy. The purpose of this research paper is to introduce a new scheme of automation in the sand casting process by means of machine-vision-based technology for mold positioning. Automation has been achieved by developing a novel system in which casting molds of different sizes, having different pouring cup locations and radii, position themselves in front of the induction furnace such that the center of the pouring cup comes directly beneath the pouring point of the furnace. The coordinates of the center of the pouring cup are found by using computer vision algorithms. The output is then transferred to a microcontroller which controls the alignment mechanism on which the mold is placed at the optimum location.
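The abstract states that the pouring-cup center coordinates are found "by using computer vision algorithms" without specifying which. As a minimal stand-in, the sketch below locates the center of a dark circular cup opening as the centroid of thresholded pixels on a synthetic mold image; the threshold value, image layout, and the assumption that the cup is darker than the mold surface are all illustrative, not taken from the paper.

```python
import numpy as np

def cup_center(gray, thresh):
    """Estimate the pouring-cup center as the centroid of pixels darker
    than `thresh` (the cup opening is assumed darker than the mold)."""
    ys, xs = np.nonzero(gray < thresh)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Synthetic mold image: bright surface (200) with a dark disk (20)
# of radius 8 centered at pixel (x=30, y=20) standing in for the cup.
img = np.full((50, 60), 200, dtype=np.uint8)
yy, xx = np.mgrid[0:50, 0:60]
img[(xx - 30) ** 2 + (yy - 20) ** 2 <= 8 ** 2] = 20
cx, cy = cup_center(img, thresh=100)
```

In the described system, the recovered `(cx, cy)` would be converted to a physical offset and sent to the microcontroller driving the alignment mechanism; a Hough circle transform would be a more robust choice than a bare centroid when the cup radius varies.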
This article concerns the problems of a defective depth map and the limited field of view of Kinect-style RGB-D sensors. An anisotropic diffusion based hole-filling method is proposed to recover invalid depth data in the depth map. The field of view of the Kinect-style RGB-D sensor is extended by stitching depth and color images from several RGB-D sensors. By aligning the depth map with the color image, the registration data calculated by registering color images can be used to stitch depth and color images into a depth and color panoramic image concurrently in real time. Experiments show that the proposed stitching method can generate an RGB-D panorama with no invalid depth data and little distortion in real time and can be extended to incorporate more RGB-D sensors to construct even a 360° field-of-view panoramic RGB-D image.
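The article's hole-filling step is based on anisotropic diffusion; the sketch below shows the general Perona-Malik-style idea on a toy depth map, where only invalid pixels are updated so that valid measurements act as fixed boundary conditions. The function name, initialization strategy, and the `kappa`/`lam`/`iters` values are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def fill_holes(depth, invalid, iters=200, kappa=30.0, lam=0.2):
    """Perona-Malik-style diffusion that propagates valid depth into holes.

    Only pixels flagged in `invalid` are updated; the edge-stopping
    conductance exp(-(g/kappa)^2) damps diffusion across depth edges.
    """
    d = depth.astype(float).copy()
    d[invalid] = d[~invalid].mean()          # rough initialization of holes
    for _ in range(iters):
        p = np.pad(d, 1, mode='edge')
        grads = [p[:-2, 1:-1] - d, p[2:, 1:-1] - d,   # up, down neighbors
                 p[1:-1, :-2] - d, p[1:-1, 2:] - d]   # left, right neighbors
        flow = sum(np.exp(-(g / kappa) ** 2) * g for g in grads)
        d[invalid] += lam * flow[invalid]
    return d

# Toy depth map: a left-to-right ramp with a 2x2 hole of invalid (zero) readings.
yy, xx = np.mgrid[0:8, 0:8]
depth = 1000.0 + 10.0 * xx
depth[3:5, 3:5] = 0.0
invalid = depth == 0.0
filled = fill_holes(depth, invalid)          # hole converges to the local ramp
```

Because the conductance term shrinks where gradients are large, this scheme fills holes from their neighborhood without smearing depth values across genuine object boundaries, which is the property the article exploits.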
Discriminative tracking methods use binary classification to discriminate between the foreground and background and have achieved some useful results. However, the supply of labeled training samples is insufficient for them to achieve accurate tracking. Hence, discriminative classifiers must use their own classification results to update themselves, which may lead to feedback-induced tracking drift. To overcome these problems, we propose a semisupervised tracking algorithm that uses deep representation and transfer learning. Firstly, a 2D multilayer deep belief network is trained with a large amount of unlabeled samples. The nonlinear mapping learned at the top of this network is extracted as the feature dictionary. Then, this feature dictionary is utilized to transfer-train and update a deep tracker. The positive samples for training are the tracked vehicles, and the negative samples are the background images. Finally, a particle filter is used to estimate vehicle position. We demonstrate experimentally that our proposed vehicle tracking algorithm can effectively restrain drift while also maintaining adaptation to vehicle appearance. Compared with similar algorithms, our method achieves a better tracking success rate and fewer average central-pixel errors.
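The pipeline described above scores candidate windows with a deep-feature classifier and feeds those scores to a particle filter. The sketch below shows only that scoring-and-weighting step in miniature: a random projection and `tanh` stand in for the deep belief network's learned feature dictionary, and a random linear classifier stands in for the trained tracker. Every array shape and name here (`W`, `w_cls`, 8×8 patches) is a placeholder assumption, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the learned components:
# W maps a flattened candidate patch into a feature space (playing the role
# of the DBN feature dictionary); w_cls is the tracker's decision vector.
W = rng.normal(0.0, 0.1, (16, 64))
w_cls = rng.normal(0.0, 1.0, 16)

def confidence(patch):
    """Foreground confidence of one candidate patch (sigmoid of its score)."""
    f = np.tanh(W @ patch.ravel())          # nonlinear feature mapping
    return 1.0 / (1.0 + np.exp(-(w_cls @ f)))

def weigh_candidates(patches):
    """Normalize per-candidate confidences into particle filter weights."""
    w = np.array([confidence(p) for p in patches])
    return w / w.sum()

patches = rng.normal(0.0, 1.0, (10, 8, 8))  # 10 candidate windows
weights = weigh_candidates(patches)         # weights for position estimation
```

In the full algorithm these weights would drive the particle filter's position estimate, and the classifier would be transfer-trained online with tracked vehicles as positives and background patches as negatives.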